
    Recovering Jointly Sparse Signals via Joint Basis Pursuit

    This work considers recovery of signals that are sparse over two bases. For instance, a signal might be sparse in both time and frequency, or a matrix can be simultaneously low rank and sparse. To facilitate recovery, we consider minimizing the sum of the $\ell_1$-norms corresponding to each basis, which is a tractable convex approach. We find novel optimality conditions which indicate a gain over traditional approaches where $\ell_1$ minimization is done over only one basis. Next, we analyze these optimality conditions for the particular case of time-frequency bases. Denoting the sparsity in the first and second bases by $k_1$ and $k_2$ respectively, we show that, for a general class of signals, this approach requires as few as $O(\max\{k_1,k_2\}\log\log n)$ measurements for successful recovery, hence overcoming the classical requirement of $\Theta(\min\{k_1,k_2\}\log(\frac{n}{\min\{k_1,k_2\}}))$ for $\ell_1$ minimization when $k_1 \approx k_2$. Extensive simulations show that our analysis is approximately tight.
    Comment: 8 pages, 1 figure, submitted to ISIT 201
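
    Below is a minimal sketch of the joint basis pursuit program described above, using cvxpy; the problem sizes, the choice of an orthonormal DCT as the second basis, and the demo signal are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import cvxpy as cp
from scipy.fft import dct

# Illustrative sizes (our assumption, not the paper's setup).
n, m = 64, 32
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))      # Gaussian measurement matrix
W = dct(np.eye(n), norm="ortho")     # second basis: orthonormal DCT

# Demo signal, sparse in the identity (first) basis.
x0 = np.zeros(n)
x0[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)
y = A @ x0

# Joint basis pursuit: minimize the sum of the two l1-norms,
# subject to consistency with the measurements.
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(W @ x)),
                  [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x0))
```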

    New Null Space Results and Recovery Thresholds for Matrix Rank Minimization

    Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. As in compressed sensing, recovery thresholds for NNM have been studied via null space characterizations in \cite{arxiv,Recht_Xu_Hassibi}. However, simulations show that these thresholds are far from optimal, especially in the low-rank region. In this paper we apply Stojnic's recent analysis for compressed sensing \cite{mihailo} to the null space conditions of NNM. The resulting thresholds are significantly better, and in particular our weak threshold appears to match the simulation results. Furthermore, our curves suggest that for any rank growing linearly with the matrix size $n$, an oversampling of only three times the model complexity suffices for weak recovery. Similar to \cite{arxiv}, we analyze the conditions for weak, sectional and strong thresholds. Additionally, a separate analysis is given for the special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.
    Comment: 28 pages, 2 figures
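
    As a concrete illustration of NNM, here is a hedged cvxpy sketch that recovers a low-rank matrix from random linear measurements; the sizes, chosen so that the number of measurements is roughly three times the model complexity $r(2n-r)$, are our assumptions for the demo.

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes: n x n matrix of rank r, m Gaussian measurements.
# m is set near 3x the model complexity r * (2n - r) discussed above.
n, r = 12, 1
m = 3 * r * (2 * n - r)
rng = np.random.default_rng(1)
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Measurement i is the inner product <A_i, X0> with a Gaussian matrix A_i.
As = rng.standard_normal((m, n, n))
b = np.tensordot(As, X0, axes=([1, 2], [0, 1]))

# Nuclear norm minimization: the convex surrogate for rank minimization.
X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(As[i], X)) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()
print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```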

    Simple Error Bounds for Regularized Noisy Linear Inverse Problems

    Consider estimating a structured signal $\mathbf{x}_0$ from linear, underdetermined and noisy measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\mathbf{z}$, via solving a variant of the lasso algorithm: $\hat{\mathbf{x}}=\arg\min_{\mathbf{x}}\{\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2+\lambda f(\mathbf{x})\}$. Here, $f$ is a convex function aiming to promote the structure of $\mathbf{x}_0$, say the $\ell_1$-norm to promote sparsity or the nuclear norm to promote low-rankness. We assume that the entries of $\mathbf{A}$ are independent and normally distributed, and make no assumptions on the noise vector $\mathbf{z}$ other than that it is independent of $\mathbf{A}$. Under this generic setup, we derive a general, non-asymptotic and rather tight upper bound on the $\ell_2$-norm of the estimation error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$. Our bound is geometric in nature and obeys a simple formula; the roles of $\lambda$, $f$ and $\mathbf{x}_0$ are all captured by a single summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$, termed the Gaussian squared distance to the scaled subdifferential. We connect our result to the literature and verify its validity through simulations.
    Comment: 6 pages, 2 figures
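
    The following cvxpy sketch solves the lasso variant above with $f$ taken to be the $\ell_1$-norm; the dimensions, sparsity, noise level, and regularization weight $\lambda$ are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cvxpy as cp

# Illustrative setup (sizes, sparsity, noise level are our assumptions).
n, m, k = 200, 80, 5
rng = np.random.default_rng(2)
A = rng.standard_normal((m, n))          # i.i.d. Gaussian entries
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
z = 0.1 * rng.standard_normal(m)         # any noise independent of A
y = A @ x0 + z

# Note the *unsquared* l2 loss, as in the formulation above.
lam = 1.0                                # regularization weight (assumption)
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm1(x)))
prob.solve()
print("estimation error:", np.linalg.norm(x.value - x0))
```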

    Sharp Time–Data Tradeoffs for Linear Inverse Problems

    In this paper we characterize sharp time-data tradeoffs for optimization problems used for solving linear inverse problems. We focus on the minimization of a least-squares objective subject to a constraint defined as the sub-level set of a penalty function. We present a unified convergence analysis of the gradient projection algorithm applied to such problems. We sharply characterize the convergence rate associated with a wide variety of random measurement ensembles in terms of the number of measurements and the structural complexity of the signal with respect to the chosen penalty function. The results apply to both convex and nonconvex constraints, demonstrating that a linear convergence rate is attainable even though the least-squares objective is not strongly convex in these settings. When specialized to Gaussian measurements, our results show that such linear convergence occurs when the number of measurements is merely 4 times the minimal number required to recover the desired signal at all (a.k.a. the phase transition). We also achieve a slower but still geometric rate of convergence precisely above the phase transition point. Extensive numerical results suggest that the derived rates exactly match the empirical performance.
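
    To make the algorithm concrete, here is a small numpy sketch of gradient projection with a nonconvex constraint: projecting onto the set of $k$-sparse vectors reduces to hard thresholding. The sizes, step size, and iteration count are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def project_sparse(v, k):
    """Euclidean projection onto {x : ||x||_0 <= k}: keep the k largest entries."""
    out = np.zeros_like(v)
    keep = np.argpartition(np.abs(v), -k)[-k:]
    out[keep] = v[keep]
    return out

# Illustrative setup: k-sparse signal, normalized Gaussian measurements.
n, m, k = 400, 160, 10
rng = np.random.default_rng(3)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# Projected gradient descent on 0.5 * ||y - A x||^2 over the k-sparse set.
x, step = np.zeros(n), 1.0
errors = []
for _ in range(200):
    x = project_sparse(x + step * A.T @ (y - A @ x), k)
    errors.append(np.linalg.norm(x - x0))

# With enough measurements, the error decays geometrically (linear rate).
print(errors[::40])
```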